
    Fully Automatic Video Colorization with Self-Regularization and Diversity

    We present a fully automatic approach to video colorization with self-regularization and diversity. Our model contains a colorization network for video frame colorization and a refinement network for spatiotemporal color refinement. Without any labeled data, both networks can be trained with self-regularized losses defined in bilateral and temporal space. The bilateral loss enforces color consistency between neighboring pixels in a bilateral space, and the temporal loss imposes constraints between corresponding pixels in two nearby frames. Because video colorization is a multi-modal problem, our method uses a perceptual loss with diversity to differentiate the various modes in the solution space. Perceptual experiments demonstrate that our approach outperforms state-of-the-art approaches on fully automatic video colorization. The results are shown in the supplementary video at https://youtu.be/Y15uv2jnK-4
    Comment: Published at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019
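
    To make the self-regularized objective concrete, the sketch below shows a PyTorch-style temporal consistency loss between two nearby frames. This is a minimal sketch, assuming a precomputed backward optical flow and an occlusion/confidence mask; the function names and tensor shapes are illustrative, the bilateral term is omitted, and none of this is the paper's exact formulation.

    ```python
    # Hypothetical sketch of a temporal self-regularization loss: penalize color
    # differences between corresponding pixels of two nearby frames, where
    # correspondence comes from a precomputed optical flow (an assumption here).
    import torch
    import torch.nn.functional as F

    def warp(img, flow):
        # Backward-warp img (B,C,H,W) with flow (B,2,H,W) via grid_sample.
        B, _, H, W = img.shape
        ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
        base = torch.stack((xs, ys), dim=0).float().to(img.device)  # (2,H,W)
        coords = base.unsqueeze(0) + flow                           # (B,2,H,W)
        # Normalize pixel coordinates to [-1, 1] for grid_sample.
        gx = 2.0 * coords[:, 0] / (W - 1) - 1.0
        gy = 2.0 * coords[:, 1] / (H - 1) - 1.0
        grid = torch.stack((gx, gy), dim=-1)                        # (B,H,W,2)
        return F.grid_sample(img, grid, align_corners=True)

    def temporal_loss(color_t, color_t1, flow, mask):
        # mask (B,1,H,W) downweights occluded pixels where flow is unreliable.
        return (mask * (color_t - warp(color_t1, flow)).abs()).mean()
    ```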

    Robust Reflection Removal with Flash-only Cues in the Wild

    We propose a simple yet effective reflection-free cue for robust reflection removal from a pair of flash and ambient (no-flash) images. The reflection-free cue exploits a flash-only image obtained by subtracting the ambient image from the corresponding flash image in raw data space. The flash-only image is equivalent to an image taken in a dark environment with only a flash on. This flash-only image is visually reflection-free and thus can provide robust cues to infer the reflection in the ambient image. Since the flash-only image usually has artifacts, we further propose a dedicated model that not only utilizes the reflection-free cue but also avoids introducing artifacts, which helps accurately estimate reflection and transmission. Our experiments on real-world images with various types of reflection demonstrate the effectiveness of our model with reflection-free flash-only cues: our model outperforms state-of-the-art reflection removal approaches by more than 5.23 dB in PSNR. We extend our approach to handheld photography to address the misalignment between the flash and no-flash pair. With misaligned training data and the alignment module, our aligned model outperforms our previous version by more than 3.19 dB in PSNR on a misaligned dataset. We also study using linear RGB images as training data. Our source code and dataset are publicly available at https://github.com/ChenyangLEI/flash-reflection-removal
    Comment: Extension of the CVPR 2021 paper [arXiv:2103.04273], submitted to TPAMI
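
    The core cue reduces to a subtraction in linear raw space. A minimal sketch, assuming aligned raw captures with identical exposure settings (the helper name is an illustrative assumption):

    ```python
    # Flash-only image: subtract the ambient raw capture from the flash raw
    # capture. The result approximates a photo taken in the dark with only the
    # flash on, and is therefore (visually) reflection-free.
    import numpy as np

    def flash_only_image(flash_raw: np.ndarray, ambient_raw: np.ndarray) -> np.ndarray:
        # Subtraction is only physically meaningful in linear (raw) space,
        # before any nonlinear tone mapping is applied.
        diff = flash_raw.astype(np.float64) - ambient_raw.astype(np.float64)
        return np.clip(diff, 0.0, None)  # clamp noise-induced negatives
    ```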

    Blind Video Deflickering by Neural Filtering with a Flawed Atlas

    Many videos contain flickering artifacts. Common causes of flicker include video processing algorithms, video generation algorithms, and capturing videos under certain conditions. Prior work usually requires specific guidance, such as the flickering frequency, manual annotations, or extra consistent videos, to remove the flicker. In this work, we propose a general flicker removal framework that receives only a single flickering video as input, without additional guidance. Since it is blind to the specific flickering type or guidance, we name this task "blind deflickering." The core of our approach is using a neural atlas in cooperation with a neural filtering strategy. The neural atlas is a unified representation of all frames in a video that provides temporal consistency guidance but is flawed in many cases. To this end, a neural network is trained to mimic a filter: it learns the consistent features (e.g., color, brightness) and avoids introducing the artifacts in the atlas. To validate our method, we construct a dataset that contains diverse real-world flickering videos. Extensive experiments show that our method achieves satisfactory deflickering performance and even outperforms baselines that use extra guidance on a public benchmark.
    Comment: To appear in CVPR 2023. Code: github.com/ChenyangLEI/All-In-One-Deflicker Website: chenyanglei.github.io/deflicker
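
    As a rough illustration of the neural filtering idea, the sketch below pairs each flickering frame with its atlas-based reconstruction and predicts a deflickered frame; the architecture, channel widths, and input layout are assumptions, not the paper's exact network.

    ```python
    # Hypothetical filter network: keep the atlas's temporally consistent
    # color/brightness while discarding the atlas's structural artifacts.
    import torch
    import torch.nn as nn

    class FilterNet(nn.Module):
        def __init__(self, ch: int = 32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, 3, 3, padding=1),
            )

        def forward(self, frame: torch.Tensor, atlas_frame: torch.Tensor) -> torch.Tensor:
            # Both inputs are (B,3,H,W); concatenate along channels and
            # predict the deflickered frame.
            return self.net(torch.cat([frame, atlas_frame], dim=1))
    ```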

    A study of sustainable practices in the sustainability leadership of international contractors

    With an increasing global need for sustainable development, numerous world-leading construction corporations have devoted significant effort to implementing sustainable practices. However, few previous studies have shared these valuable experiences in a systematic and quantitative way. RobecoSAM has published The Sustainability Yearbook annually since 2004; it lists the sustainability leaders in various industries, including the construction industry. Learning from these sustainability leaders can provide useful references for construction-related companies when developing their sustainable development strategies. Based on a comprehensive literature review, this paper identified 51 methods used for improving sustainability performance and 34 outcomes achieved via these methods. These methods and outcomes were used to code the sustainable practices of sustainability leaders in the construction sector. Using the coding system, 133 annual sustainability reports issued by 22 sustainability leaders (The Sustainability Yearbook, RobecoSAM 2010–2016) in the construction sector were analyzed using content analysis. Social network analysis was then employed to identify the key adopted methods and achieved outcomes (KAMAO) of these leaders. Dynamic trend and regional analyses of KAMAO are also presented. These findings give international contractors a better understanding of the primary sustainable methods adopted by sustainability leaders in the construction sector and the top outcomes those leaders achieve, and they provide a useful reference for contractors evaluating and improving their current sustainability-related strategies.
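
    As an illustration of the coding-plus-network step, the sketch below builds a method-outcome co-occurrence graph and ranks nodes by degree centrality, one common way to surface key nodes in social network analysis; the record format and example codes are hypothetical stand-ins for the paper's 51 methods and 34 outcomes.

    ```python
    # Build a bipartite method-outcome network from coded report records and
    # rank nodes by degree centrality to approximate a KAMAO-style analysis.
    import networkx as nx

    # Hypothetical coded records: (method code, outcome codes linked to it).
    coded_records = [
        ("M01_green_procurement", ["O07_emission_reduction"]),
        ("M12_employee_training", ["O03_safety_improvement", "O07_emission_reduction"]),
        ("M23_waste_management", ["O07_emission_reduction", "O11_cost_savings"]),
    ]

    G = nx.Graph()
    for method, outcomes in coded_records:
        for outcome in outcomes:
            G.add_edge(method, outcome)

    # Higher centrality = more frequently co-cited method or outcome.
    for node, score in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]):
        print(f"{node}: {score:.2f}")
    ```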

    An Evaluation on Bridge Bearing Capacity under Scour and Re-occurrence of Strong Earthquake

    Plagued by frequent calamities, Bridge No.3 experienced a magnitude-8 earthquake on May 12, 2008, and several years later its pile foundation was intensively scoured, with scour depths ranging from 4.5 to 9.2 meters. Considering the intense scour and the possible re-occurrence of a strong earthquake, the existing Chinese standard and seismic response analysis are used to study the bearing capacity and seismic performance of the pier and pile foundation of Bridge No.3 before and after scour. Calculations show that the bridge was stable before scour but, in its scoured state, can hardly withstand a strong earthquake, so consolidation is required. The results may serve as an important reference for bridges affected by severe scour and strong earthquakes.
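
    As a back-of-the-envelope illustration only (not the standard-based calculation used in the study), the snippet below shows how the reported scour depths translate into lost pile embedment under an assumed original embedded length:

    ```python
    # The 4.5 m and 9.2 m scour depths come from the abstract; the 30 m
    # original embedment is a purely hypothetical value for illustration.
    embedded_length_m = 30.0
    for scour_m in (4.5, 9.2):
        remaining_m = embedded_length_m - scour_m
        loss_pct = 100.0 * scour_m / embedded_length_m
        print(f"scour {scour_m:>4.1f} m -> embedment {remaining_m:.1f} m "
              f"({loss_pct:.0f}% of supporting length removed)")
    ```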

    Thin On-Sensor Nanophotonic Array Cameras

    Today's commodity camera systems rely on compound optics to map light originating from the scene to positions on the sensor where it is recorded as an image. To record images without optical aberrations, i.e., deviations from Gauss' linear model of optics, typical lens systems introduce increasingly complex stacks of optical elements, which are responsible for the height of existing commodity cameras. In this work, we investigate flat nanophotonic computational cameras as an alternative that employs an array of skewed lenslets and a learned reconstruction approach. The optical array is embedded on a metasurface that, at 700 nm height, is flat and sits on the sensor cover glass at 2.5 mm focal distance from the sensor. To tackle the highly chromatic response of a metasurface and design the array over the entire sensor, we propose a differentiable optimization method that continuously samples over the visible spectrum and factorizes the optical modulation for different incident fields into individual lenses. We reconstruct a megapixel image from our flat imager with a learned probabilistic reconstruction method that employs a generative diffusion model to sample an implicit prior. To tackle scene-dependent aberrations in broadband, we propose a method for acquiring paired captured training data under varying illumination conditions. We assess the proposed flat camera design in simulation and with an experimental prototype, validating that the method is capable of recovering images from diverse scenes in broadband with a single nanophotonic layer.
    Comment: 18 pages, 12 figures, to be published in ACM Transactions on Graphics
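
    The sketch below illustrates the continuous spectral sampling inside a differentiable design loop; the phase-grid size, the placeholder PSF simulator, and the focusing loss are assumptions standing in for the paper's wave-optics model and factorized per-lens modulation.

    ```python
    # Toy design loop: each step samples a wavelength in the visible band
    # (400-700 nm), simulates a PSF from the metasurface phase, and descends
    # a loss that rewards energy concentrated at the focal point.
    import torch

    phase = torch.randn(64, 64, requires_grad=True)  # assumed phase grid
    opt = torch.optim.Adam([phase], lr=1e-2)

    def simulate_psf(phase, wavelength_nm):
        # Placeholder for a differentiable wave-optics PSF simulation.
        return torch.softmax((phase * 550.0 / wavelength_nm).flatten(), dim=0)

    target = torch.zeros(64 * 64)
    target[64 * 64 // 2] = 1.0  # ideal PSF: a single focused point

    for step in range(1000):
        wl = 400.0 + 300.0 * torch.rand(()).item()  # continuous spectral sample
        loss = torch.nn.functional.mse_loss(simulate_psf(phase, wl), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    ```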

    Tailoring surface hydrophilicity of porous electrospun nanofibers to enhance capillary and push-pull effects for moisture wicking

    In this article, the liquid moisture transport behaviors of dual-layer electrospun nanofibrous mats are reported for the first time. The dual-layer mats consist of a thick layer of hydrophilic polyacrylonitrile (PAN) nanofibers and a thin layer of hydrophobic polystyrene (PS) nanofibers, with and without interpenetrating nanopores, respectively. The mats are coated with polydopamine (PDOPA) to different extents to tailor the water wettability of the PS layer. It is found that, with a large quantity of nanochannels, the porous PS nanofibers exhibit a stronger capillary effect than the solid PS nanofibers. The capillary motion in the porous PS nanofibers can be further enhanced by slight surface modification with PDOPA while retaining the large hydrophobicity difference between the two layers, inducing a strong push–pull effect that transports water from the PS layer to the PAN layer.
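
    For intuition about why nanochannels strengthen the capillary effect, here is a textbook Young-Laplace estimate (a standard relation, not taken from the paper); the pore radii and contact angle are hypothetical values:

    ```python
    # Capillary driving pressure delta_P = 2 * gamma * cos(theta) / r: the
    # smaller the pore radius, the stronger the pull on a wetting liquid.
    import math

    def capillary_pressure(gamma_n_per_m, radius_m, theta_deg):
        return 2.0 * gamma_n_per_m * math.cos(math.radians(theta_deg)) / radius_m

    gamma = 0.072  # surface tension of water at room temperature, N/m
    for r in (50e-9, 2e-6):  # nanochannel vs. larger inter-fiber pore (assumed)
        print(f"r = {r:.0e} m -> {capillary_pressure(gamma, r, 60.0):.2e} Pa")
    ```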